- I am a latecomer, so forgive me if this is naive or old hat.
-
- Many of the issues that people seem to be grappling with are already
- handled by news.
-
- For example, we are talking about caching nodes. News has highly evolved
- caching capabilities -- I mean, caching is what it is all about -- over
- both TCP/IP and UUCP-based links.
-
- Someone mentioned the issue of caching and node names; apparently node
- names would have to be rewritten by the cacher, or else made
- machine-independent in some way (?). Article IDs are guaranteed unique
- and are server-independent. The mechanism for translating article
- IDs to filenames is fast and pretty highly evolved.
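-
- To make that concrete, here is a toy sketch of the lookup -- purely
- illustrative Python, not code from any real news system (which would use
- something like a hashed history database); the spool path and
- history-file layout below are invented:
-
-     import os
-
-     SPOOL = "/usr/spool/news"                    # assumed spool location
-     HISTORY = os.path.join(SPOOL, "history")     # assumed: "<msg-id>\t<group>/<number>" per line
-
-     def spool_path(message_id):
-         """Return the local filename holding the given article ID, or None."""
-         with open(HISTORY) as hist:
-             for line in hist:
-                 msgid, _, location = line.rstrip("\n").partition("\t")
-                 if msgid == message_id:
-                     # "alt.hypertext/123" -> "/usr/spool/news/alt/hypertext/123"
-                     group, _, number = location.rpartition("/")
-                     return os.path.join(SPOOL, group.replace(".", "/"), number)
-         return None
-
- The point is that the caller only ever sees the server-independent ID;
- the mapping to a local file stays the server's business.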
-
- Oh, ugh, "Supersedes:" doesn't cut it unless the article superseding
- the old one reused its article ID, which would probably be Bad.
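-
- (To spell out what I mean -- just an illustration, with made-up IDs: the
- superseding article carries a brand-new article ID and only names the
- old one in a header, so existing references to the old ID still dangle
- once the old article goes away.)
-
-     old_id = "<node.v1@example.edu>"          # made-up article IDs
-     new_id = "<node.v2@example.edu>"
-
-     headers = "\r\n".join([
-         "Message-ID: " + new_id,              # references to old_id now dangle
-         "Supersedes: " + old_id,
-         "Newsgroups: alt.hypertext",
-         "Subject: Updated node",
-     ])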
-
- Expiration dates can be set with "Expires:", and sites that
- archive certain groups already do special things based on "Archive-Name:".
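-
- For illustration (again a toy sketch of my own, not how expire is really
- implemented), those hooks are just headers on the article, and an expire
- pass only has to compare "Expires:" against the clock:
-
-     from email.utils import parsedate_to_datetime
-     from datetime import datetime, timezone
-
-     headers = {
-         "Expires": "Mon, 01 Jun 1992 00:00:00 GMT",     # made-up values
-         "Archive-Name": "www-talk/some-thread",
-     }
-
-     def is_expired(expires_header):
-         return parsedate_to_datetime(expires_header) < datetime.now(timezone.utc)
-
-     # e.g. is_expired(headers["Expires"])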
-
- Plus news is already ultra-portable.
-
- Is the brief-connection-per-document approach of HTTP still necessary
- when the data is widely replicated?
-
- It would be painful to go reap all the references that
- point to expired articles, although if a user traversed to an expired
- article, perhaps it could be pulled off tape or from an NNTP superserver
- somewhere.
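-
- Something like the following could do the "pull it from somewhere else"
- step -- a rough sketch over plain NNTP, with made-up host names, no
- dot-unstuffing, and no error handling:
-
-     import socket
-
-     def fetch_article(message_id,
-                       hosts=("news.local", "nntp-archive.example.org")):
-         for host in hosts:                        # local server, then the archive
-             with socket.create_connection((host, 119)) as s:
-                 f = s.makefile("rwb")
-                 f.readline()                      # server greeting
-                 f.write(b"ARTICLE " + message_id.encode("ascii") + b"\r\n")
-                 f.flush()
-                 if f.readline().startswith(b"220"):   # 220 = article follows
-                     lines = []
-                     for line in f:
-                         if line == b".\r\n":          # lone dot ends the article
-                             return b"".join(lines)
-                         lines.append(line)
-         return None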
-
- Clearly the authors of WWW think news is important because WWW has
- nice capabilities for accessing NNTP servers. What, then, is the
- motivation for HTTP as opposed to, say, using news with HTML article
- bodies?
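-
- To be clear about what I am picturing -- an ordinary article, posted the
- usual way, whose body just happens to be HTML (everything here is made
- up, including the addresses):
-
-     article = "\r\n".join([
-         "Newsgroups: alt.hypertext",
-         "From: someone@example.edu",
-         "Subject: A hypertext node as a news article",
-         "Message-ID: <node-42@example.edu>",
-         "",                                       # blank line ends the headers
-         "<TITLE>A hypertext node</TITLE>",
-         "<P>Links would point at other articles by article ID, e.g.",
-         "<A HREF=\"news:other-node@example.edu\">like this</A>.",
-     ])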
-